36 research outputs found

    DESIGN OF TWO STAGE BULK-DRIVEN OPERATIONAL TRANSCONDUCTANCE AMPLIFIER (OTA) WITH A HIGH GAIN FOR LOW VOLTAGE APPLICATION

    An Operational Transconductance Amplifier (OTA) is a voltage-controlled current source that produces an output current proportional to the input voltage. This thesis presents a 180nm OTA architecture aimed at improving the open-loop gain at a 0.9V supply voltage using a rail-to-rail bulk-driven input stage. Results show an open-loop gain of 97.14 dB with a power consumption of 3.33 µW. An OTA with over 90 dB of open-loop gain and low power consumption is highly suitable for low-voltage applications. The slew rate of the OTA is 0.05 V/µs with a unity-gain bandwidth of 8.4 MHz. A 10 µA ideal bias current reference is used for the design, and the phase margin is around 49.2 degrees. The threshold voltage of a 180nm N-channel Metal Oxide Semiconductor (NMOS) device is around 400 mV, which restricts low-voltage operation in most amplifier circuits. The fourth terminal (bulk) of the MOS device is therefore used to optimize the voltage headroom (Vds): a bulk-driven transistor requires a much lower source-to-drain voltage than a gate-driven one and remains ON with an input voltage as low as 0.1V. A bulk-driven input stage thus enables amplification in the subthreshold region (input signal below the threshold voltage of the MOS device). In addition, a rail-to-rail input stage is employed to extend the dynamic range of the input signal from 0V to 0.9V at a 0.9V supply voltage. The fluctuation in open-loop gain with the input signal reported in published research stems from the variation of the intrinsic transconductance of the input devices. This thesis presents a possible solution by adding a second stage to the OTA (introducing a second dominant pole), which reduces the dependence of the total open-loop gain on the intrinsic transconductance of the bulk-driven input devices. A significant gain of 97.14 dB with minimal fluctuation is thereby achieved. Adding a second stage also improves the gain by distributing its dependence across both poles of the circuit, so the fluctuating transconductance of the input stage is compensated by the relatively constant intrinsic transconductance of the MOS device at the second pole (M19). To improve the gain, a folded-cascode amplifier connected to the input stage provides a high output impedance; this first stage is known as the gain stage. In the second stage, a large PMOS common-source amplifier delivers a larger output current than the input stage, enhancing the output swing and driving a purely capacitive load of 0.5 pF. A Miller capacitance between the first and second stages provides frequency compensation and improves the unity-gain bandwidth. An additional biasing circuit in the second stage amplifies the current output of the first stage, improving the slew rate of the entire device; it also resolves the biasing issues of the second-stage common-source amplifier and improves the output swing, yielding a clean, undistorted output waveform. All simulations are carried out in the LTspice simulation tool to examine the waveforms and the Bode plot for open-loop gain and phase margin (49.2 degrees) across process corners (slow, typical, and fast), input voltages (0-0.9V), supply voltages (0.8V, 0.9V, 1.0V), and temperatures (-10 to 100 °C).
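    For readers less familiar with two-stage, Miller-compensated OTAs, the quantities quoted in the abstract are tied together by the standard textbook small-signal relations sketched below. The symbols (bulk transconductance g_mb1 of the input pair, second-stage transconductance g_m2, Miller capacitor C_c, first-stage tail current I_tail, stage output resistances R_out1 and R_out2) are generic placeholders, not values taken from the thesis.

```latex
% Standard two-stage, Miller-compensated OTA relations (textbook form;
% symbols are generic placeholders, not thesis-specific values).
\begin{align*}
  A_v &= A_{v1}\,A_{v2} = \bigl(g_{mb1}\,R_{out1}\bigr)\bigl(g_{m2}\,R_{out2}\bigr)
        && \text{total open-loop gain}\\
  \mathrm{GBW} &\approx \frac{g_{mb1}}{2\pi C_c}
        && \text{unity-gain bandwidth}\\
  \mathrm{SR} &\approx \frac{I_{\mathrm{tail}}}{C_c}
        && \text{slew rate}
\end{align*}
```

    These relations make the design trade-off visible: a larger Miller capacitor improves stability but lowers both the unity-gain bandwidth and the slew rate, which is why the extra second-stage biasing circuit for slew-rate enhancement is mentioned in the abstract.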

    Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation

    The success of deep learning methods hinges on the availability of large training datasets annotated for the task of interest. In contrast to human intelligence, these methods lack versatility and struggle to learn and adapt quickly to new tasks, where labeled data is scarce. Meta-learning aims to solve this problem by training a model on a large number of few-shot tasks, with an objective to learn new tasks quickly from a small number of examples. In this paper, we propose a meta-learning framework for few-shot word sense disambiguation (WSD), where the goal is to learn to disambiguate unseen words from only a few labeled instances. Meta-learning approaches have so far been typically tested in an N-way, K-shot classification setting where each task has N classes with K examples per class. Owing to its nature, WSD deviates from this controlled setup and requires the models to handle a large number of highly unbalanced classes. We extend several popular meta-learning approaches to this scenario, and analyze their strengths and weaknesses in this new challenging setting. Comment: Added additional experiment
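    To make the episode construction concrete, the sketch below shows how a single few-shot WSD task might be sampled: the classes are the senses of one target word, and both the number of senses and the number of examples per sense are dictated by the data rather than fixed to N and K. The function name sample_episode and the toy data are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

def sample_episode(instances, support_size=4, query_size=4, rng=random):
    """Build one few-shot WSD episode for a single target word.

    `instances` is a list of (sentence, sense_label) pairs for one word.
    Unlike the usual N-way K-shot setup, the number of senses and the
    number of examples per sense are whatever the data provides, so the
    episode can be highly unbalanced.  (Illustrative helper only.)
    """
    by_sense = defaultdict(list)
    for sentence, sense in instances:
        by_sense[sense].append((sentence, sense))

    support, query = [], []
    for sense, examples in by_sense.items():
        rng.shuffle(examples)
        support.extend(examples[:support_size])
        query.extend(examples[support_size:support_size + query_size])
    return support, query

# Toy usage: the word "bank" with two senses of very different frequency.
data = [("he sat by the river bank", "bank.shore")] * 2 + \
       [("she deposited cash at the bank", "bank.finance")] * 10
support_set, query_set = sample_episode(data, support_size=2, query_size=3)
```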

    Neural Character-based Composition Models for Abuse Detection

    The advent of social media in recent years has fed into some highly undesirable phenomena such as the proliferation of offensive language, hate speech, sexist remarks, etc. on the Internet. In light of this, there have been several efforts to automate the detection and moderation of such abusive content. However, deliberate obfuscation of words by users to evade detection poses a serious challenge to the effectiveness of these efforts. The current state-of-the-art approaches to abusive language detection, based on recurrent neural networks, do not explicitly address this problem and resort to a generic OOV (out of vocabulary) embedding for unseen words. However, in using a single embedding for all unseen words we lose the ability to distinguish between obfuscated and non-obfuscated or rare words. In this paper, we address this problem by designing a model that can compose embeddings for unseen words. We experimentally demonstrate that our approach significantly advances the current state of the art in abuse detection on datasets from two different domains, namely Twitter and Wikipedia talk pages. Comment: In Proceedings of the EMNLP Workshop on Abusive Language Online 201
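    The core idea, composing an embedding for an unseen or obfuscated word from its characters, can be sketched as follows. This is an illustrative character-BiLSTM composer written against PyTorch, not the paper's exact architecture; the class name CharComposer and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class CharComposer(nn.Module):
    """Compose a word embedding from its characters (illustrative sketch,
    not the paper's exact architecture)."""

    def __init__(self, n_chars, char_dim=16, word_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.encoder = nn.LSTM(char_dim, word_dim // 2,
                               batch_first=True, bidirectional=True)

    def forward(self, char_ids):          # char_ids: (batch, max_word_len)
        embedded = self.char_emb(char_ids)
        _, (hidden, _) = self.encoder(embedded)
        # Concatenate the final forward and backward states into one vector
        # that can stand in for the missing word embedding.
        return torch.cat([hidden[0], hidden[1]], dim=-1)   # (batch, word_dim)

# Example: build a vector for an obfuscated token such as "id.iot".
char_vocab = {c: i + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz.")}
composer = CharComposer(n_chars=len(char_vocab) + 1)
ids = torch.tensor([[char_vocab[c] for c in "id.iot"]])
vector = composer(ids)                    # shape: (1, 100)
```

    A vector produced this way can replace the single shared OOV embedding, so obfuscated variants stay close to their unperturbed forms instead of collapsing onto one generic representation.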

    Joint Modelling of Emotion and Abusive Language Detection

    The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online. Aiming to tackle this problem, the natural language processing (NLP) community has experimented with a range of techniques for abuse detection. While achieving substantial success, these methods have so far only focused on modelling the linguistic properties of the comments and the online communities of users, disregarding the emotional state of the users and how this might affect their language. The latter is, however, inextricably linked to abusive behaviour. In this paper, we present the first joint model of emotion and abusive language detection, experimenting in a multi-task learning framework that allows one task to inform the other. Our results demonstrate that incorporating affective features leads to significant improvements in abuse detection performance across datasets. Comment: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 202
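    The multi-task setup described above can be sketched as a hard-parameter-sharing model: one shared sentence encoder feeding two classification heads, trained with a summed loss so each task informs the other. The model below is a minimal illustrative version, not the paper's exact architecture; class names, layer choices, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class JointAbuseEmotionModel(nn.Module):
    """Hard parameter sharing: one sentence encoder, two task heads
    (illustrative multi-task sketch, not the paper's exact model)."""

    def __init__(self, vocab_size, emb_dim=100, hidden=128,
                 n_abuse_classes=2, n_emotion_classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.abuse_head = nn.Linear(hidden, n_abuse_classes)
        self.emotion_head = nn.Linear(hidden, n_emotion_classes)

    def forward(self, token_ids):
        _, state = self.encoder(self.embed(token_ids))
        sentence = state.squeeze(0)               # (batch, hidden)
        return self.abuse_head(sentence), self.emotion_head(sentence)

model = JointAbuseEmotionModel(vocab_size=5000)
abuse_logits, emotion_logits = model(torch.randint(1, 5000, (8, 20)))
# Joint training sums the two cross-entropy losses so both tasks update
# the shared encoder (dummy all-zero targets used here for illustration).
loss = nn.CrossEntropyLoss()(abuse_logits, torch.zeros(8, dtype=torch.long)) + \
       nn.CrossEntropyLoss()(emotion_logits, torch.zeros(8, dtype=torch.long))
```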

    Investigating the Robustness of Sequential Recommender Systems Against Training Data Perturbations: an Empirical Study

    Sequential Recommender Systems (SRSs) have been widely used to model user behavior over time, but their robustness in the face of perturbations to training data is a critical issue. In this paper, we conduct an empirical study to investigate the effects of removing items at different positions within a temporally ordered sequence. We evaluate two different SRS models on multiple datasets, measuring their performance using Normalized Discounted Cumulative Gain (NDCG) and Rank Sensitivity List metrics. Our results demonstrate that removing items at the end of the sequence significantly impacts performance, with NDCG decreasing by up to 60%, while removing items from the beginning or middle has no significant effect. These findings highlight the importance of considering the position of the perturbed items in the training data and should inform the design of more robust SRSs.
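    A minimal sketch of the perturbation protocol described above: remove a fixed fraction of interactions from the beginning, middle, or end of a user's temporally ordered sequence before retraining and re-evaluating the SRS. The helper perturb_sequence and its parameters are illustrative assumptions, not the authors' code.

```python
def perturb_sequence(items, position="end", fraction=0.2):
    """Remove a fraction of interactions from one region of a user's
    temporally ordered sequence (illustrative; not the authors' code)."""
    n_remove = max(1, int(len(items) * fraction))
    if position == "beginning":
        return items[n_remove:]
    if position == "end":
        return items[:-n_remove]
    if position == "middle":
        start = (len(items) - n_remove) // 2
        return items[:start] + items[start + n_remove:]
    raise ValueError(f"unknown position: {position}")

# Toy usage on a 10-item interaction history.
history = list(range(10))
print(perturb_sequence(history, "beginning"))  # drops the 2 oldest items
print(perturb_sequence(history, "middle"))     # drops 2 items from the middle
print(perturb_sequence(history, "end"))        # drops the 2 most recent items
```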

    Scientific and Creative Analogies in Pretrained Language Models

    This paper examines the encoding of analogy in large-scale pretrained language models, such as BERT and GPT-2. Existing analogy datasets typically focus on a limited set of analogical relations, with high similarity between the two domains across which the analogy holds. As a more realistic setup, we introduce the Scientific and Creative Analogy dataset (SCAN), a novel analogy dataset containing systematic mappings of multiple attributes and relational structures across dissimilar domains. Using this dataset, we test the analogical reasoning capabilities of several widely-used pretrained language models (LMs). We find that state-of-the-art LMs achieve low performance on these complex analogy tasks, highlighting the challenges still posed by analogy understanding. Comment: To be published in Findings of EMNLP 202
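    A sketch of how a SCAN-style analogy might be represented and scored is given below. The solar-system/atom mapping is the classic Rutherford analogy, used here only as an example, and predict_target is a hypothetical placeholder for querying a pretrained LM (e.g., via a cloze or completion prompt); none of this is the paper's actual evaluation code.

```python
# Illustrative SCAN-style analogy: systematic mappings between two
# dissimilar domains (example only; not necessarily an entry in SCAN).
analogy = {
    "source_domain": "solar system",
    "target_domain": "atom",
    "mappings": {"sun": "nucleus",
                 "planet": "electron",
                 "gravity": "electromagnetic force"},
}

def predict_target(source_concept, source_domain, target_domain):
    """Hypothetical stand-in for an LM query, e.g. asking a model to
    complete 'If the solar system is like the atom, then the sun is
    like ...'. Returns a fixed answer so the sketch runs end to end."""
    return "nucleus"

def mapping_accuracy(analogy):
    """Fraction of source concepts mapped to the correct target concept."""
    correct = 0
    for source_concept, gold in analogy["mappings"].items():
        prediction = predict_target(source_concept,
                                    analogy["source_domain"],
                                    analogy["target_domain"])
        correct += int(prediction.strip().lower() == gold)
    return correct / len(analogy["mappings"])

print(mapping_accuracy(analogy))  # 1/3 with the dummy predictor above
```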